There are two truths around which organizations across the world are aligning:
- AI is here to stay, with enormous potential to benefit organizations (and people), and
- To use AI to that full potential, organizations must first solve its privacy issues.
Even the most cursory review of technology and business trends highlights AI as the most important new tool in a company’s arsenal, and the one with the most potential to benefit people. Benefits to companies include efficiency, task automation, and improved speed of innovation and decision-making. Consumer benefits also include increased efficiency and task automation, plus the convenience and connectivity that people are beginning to expect.
At the same time, any technology in its initial stages faces problems that its creators and users must solve. AI is no exception. Specifically, privacy issues create significant challenges for responsible AI, challenges that consumers, regulators, and ethical companies alike recognize must be solved before organizations can truly unleash AI’s power.
In this blog, we’ll explore five key privacy priorities for enterprises adopting AI:
- Making privacy a core AI principle: embedding privacy into the foundation of AI strategy and governance.
- Auditing data flows and storage: understanding and monitoring how data moves and where it resides to ensure proper controls.
- Training teams on AI risks: educating employees to prevent well-intentioned but risky decisions.
- Using privacy-preserving AI techniques: leveraging technologies like federated learning and differential privacy.
- Budgeting for AI privacy: allocating resources to support privacy initiatives and maximize ROI.
Real-world AI privacy failures
Recent examples of AI-related privacy failures underscore the scope of the problem and the urgency of solving it. AI was a central player in a United Kingdom Data Protection Act compliance issue for London’s Royal Free Hospital. In that case, Royal Free shared patient data with DeepMind, a Google-owned AI company, as part of a partnership to create an app for kidney injury detection, alerting, and diagnosis. In a separate case, Microsoft realized that it might not have provided adequate transparency or obtained appropriate consent for a facial recognition AI training data set it had supplied to companies, and it pulled the data set as a result. As one source puts it, “AI has a privacy problem.”
What do people think about privacy and AI?
Consumers as well as regulators care about privacy and AI, as do companies worried about consumer and regulator attention. One United States (U.S.) regulator, the Federal Trade Commission (FTC), uses its Consumer Sentinel Network to analyze search terms and gauge what is on consumers’ minds. Specific to AI, the FTC has learned that consumers have concerns about how AI is built and trained, the amount of data it requires, and its privacy implications.
Similarly, a recent Cisco Data Privacy Benchmark Study reveals that 86% of respondents support privacy legislation, also pointing to consumer concerns about privacy. The same source reports “the growing importance of establishing solid data privacy foundations to unleash the full potential of AI.”
The volatile nature of privacy concerns, combined with the ongoing evolution of AI, its safeguards, and its uses, means that there is no single set path to sound AI privacy. However, there are a few key priorities to which privacy, IT, and operational experts can commit that will help any organization navigate the privacy/AI landscape.
One: Make privacy a core AI principle
In any field, the process of setting guiding principles can help individuals, teams, and organizations keep an eye on the Big Few – the list of top must-haves from which other, more tactical decisions arise. Making privacy a core AI principle will help a company avoid losing sight of this critical, but sometimes elusive, requirement. Organizations that make privacy a core AI principle will find it easy to justify the necessary actions, such as:
- Involving privacy legal and compliance stakeholders in the AI design/selection and implementation process.
- Investing in Privacy Enhancing Technologies (PETs) and processes that assist in ensuring sound privacy while still freeing up the necessary data for effective AI.
- Establishing privacy governance and accountability controls, such as review boards, AI committees, and regular audits.
These steps not only help make AI development, training, and deployment more privacy-sensitive, but they also contribute to a successful project overall.
For example, the same Cisco study suggests that 96% of organizations see positive ROI from their privacy investments. Other researchers repeatedly confirm that cross-functional teams are essential for smooth, effective technology projects, and that organizational control structures can increase compliance and stakeholder engagement. In other words, making privacy a core AI principle not only elevates the compliance and success of AI within an organization; it also benefits the organization itself by enabling its broader business goals.
Two: Audit data flows and storage
Any privacy effort starts with understanding what data fields are involved, where they are located, and where and how they flow into, through, and out of the organization. Privacy specific to AI is no exception. Thoughtful decisions about, and a thorough understanding of, the data flows an organization uses to train or deploy AI (or both) will help it establish the right security and privacy controls.
For example, the organization will want to identify whether to use internal and/or external data sources, what those data points are, and where they sit at rest. The end-to-end data location(s) will have an enormous impact on data rights, security, hygiene, and many other factors. Even more importantly, deep knowledge of, and regular audits of, the end-to-end data picture will help the organization build assurance of appropriate controls across the logical and physical infrastructure.
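To make this concrete, here is a minimal, hypothetical sketch in Python of what a data inventory with a simple audit check might look like. The schema (field name, classification, storage region, downstream flows) and the approved-region list are illustrative assumptions, not a standard; real inventories typically live in dedicated data-catalog or data-mapping tools.

```python
from dataclasses import dataclass

# Assumed policy for this sketch: sensitive data may only rest in,
# or flow to, these locations. Purely illustrative names.
APPROVED_REGIONS = {"eu-west-1", "on-prem-dc1"}

@dataclass
class DataAsset:
    field_name: str
    classification: str   # e.g., "PII", "PHI", "public"
    storage_region: str   # where the data sits at rest
    flows_to: list[str]   # downstream locations it moves to

# A two-row hypothetical inventory.
inventory = [
    DataAsset("patient_id", "PHI", "on-prem-dc1", ["eu-west-1"]),
    DataAsset("email", "PII", "us-east-1", ["eu-west-1", "us-east-1"]),
]

def audit(assets: list[DataAsset]) -> None:
    # Flag sensitive fields resting in, or flowing to, unapproved locations.
    for a in assets:
        if a.classification in {"PII", "PHI"}:
            if a.storage_region not in APPROVED_REGIONS:
                print(f"FLAG: {a.field_name} at rest in {a.storage_region}")
            for dest in a.flows_to:
                if dest not in APPROVED_REGIONS:
                    print(f"FLAG: {a.field_name} flows to {dest}")

audit(inventory)  # flags the "email" field twice in this example
```

Even a sketch this small shows why the end-to-end view matters: the “email” field looks fine in isolation but fails both the at-rest and in-flow checks once its locations are actually recorded.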
As with most complicated decisions, there is no single route to success. Interestingly, in the preceding Cisco study, 90% of respondents preferred local storage for better privacy control. At the same time, an overwhelming 91% of the same respondents reported that global data hosting providers provide better data protection. On the face of it, these two views seem to conflict. What they really represent is an acknowledgement of the pros and cons of a complex decision, and of the importance of a thoughtful end-to-end view and audit of data, regardless of location or third-party involvement.
Three: Train teams on AI risks
The biggest enemy of privacy can be the most helpful employee, a paradox in which the most well-intentioned individual makes decisions that inadvertently lead to privacy infringements. The same is true with AI, where well-meaning individuals can set up AI training or deployment in ways that degrade, rather than enhance, privacy trust, compliance, and risk mitigation.
In a 2024 Cisco study on the topic, only a little more than half of respondents were familiar with their country’s privacy laws. Yet in the 2025 Cisco study, nearly half of professionals reported having exposed private data to AI tools. This suggests that training stakeholders on AI, privacy requirements, and privacy/AI best practices can go a long way toward avoiding the helpful-employee paradox.
Four: Use privacy-preserving AI techniques
Very smart people are trying to solve the AI privacy problem, and there are technologies and techniques available that can help. For example, federated learning and differential privacy are two techniques that are gaining traction in the marketplace.
Federated learning allows an AI application to train across decentralized systems without exchanging raw data. Local models train on local data, and an iterative process aggregates their updates into a global model. Differential privacy, by contrast, is a method for releasing statistical information about a dataset while protecting the privacy of the individuals within it. At the highest level, differential privacy techniques make it statistically difficult to determine whether any one individual’s data is in the dataset, thereby protecting that individual’s privacy.
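To ground the federated learning idea, here is a minimal sketch in Python/NumPy. The “clients,” their data, and the single averaging round are all simulated assumptions; production systems add multiple rounds, secure aggregation, and weighting by client data size.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_train(X, y):
    # Each client fits an ordinary least-squares model on its OWN data;
    # only the resulting weights ever leave the client.
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

# Simulate three clients, each holding private data the server never sees.
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    clients.append((X, y))

# One federated round: clients train locally, the server averages weights.
local_weights = [local_train(X, y) for X, y in clients]
global_w = np.mean(local_weights, axis=0)
print("global model weights:", global_w)  # close to [2.0, -1.0]
```

And here is a correspondingly minimal differential privacy sketch using the classic Laplace mechanism on a counting query. The dataset and epsilon value are hypothetical; a real deployment would also track the cumulative privacy budget across queries.

```python
import numpy as np

rng = np.random.default_rng(1)

def dp_count(values, predicate, epsilon):
    """Differentially private count via the Laplace mechanism.
    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so Laplace(1/epsilon) noise gives
    epsilon-differential privacy."""
    true_count = sum(1 for v in values if predicate(v))
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

ages = [34, 45, 29, 61, 52, 38, 47]  # hypothetical records
# Smaller epsilon => more noise => stronger privacy, lower accuracy.
print(dp_count(ages, lambda a: a > 40, epsilon=0.5))
```

Note the trade-off the epsilon parameter encodes: the stronger the privacy guarantee, the noisier the released statistic.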
Whatever the right combination of privacy techniques and tools turns out to be, an AI training/deployment team will want to understand the available options in order to make privacy-sensitive decisions.
Five: Budget for AI privacy
Whether the money is for privacy-preserving techniques and tools, granular consent management, vendor assistance, AI/privacy staff headcount, training, or other activities, budgeting specifically for AI privacy will help an organization secure the resources it needs. The good news is that the privacy dollars an organization spends show a compelling, positive ROI. The same Cisco 2025 study reported that 96% of respondents confirm that privacy investments provide returns above costs, and that 99% plan to reallocate privacy funds to AI-specific initiatives. In other words, privacy budget is budget well spent, and more organizations are directing privacy funds specifically at AI privacy projects.